Search for: All records

Creators/Authors contains: "Balcas, Justas"

  1. De Vita, R; Espinal, X; Laycock, P; Shadura, O (Ed.)
    The efficiency of high energy physics workflows relies on the ability to rapidly transfer data among the sites where the data is processed and analyzed. The best data transfer tools should provide a simple and reliable solution for local, regional, national and, in some cases, intercontinental data transfers. This work outlines the results of data transfer tool tests on internal and external 100 Gbps testbeds, with simulated latency and packet loss, and compares the results among the existing solutions, while also treating the question of which tuning parameters and methods help optimize transfer rates. Many tools have been developed to facilitate data transfers over wide area networks, but few studies have examined their requirements, use cases, and reliability through comparative measurements. Here, we evaluate a variety of high-performance data transfer tools used today in the LHC and other scientific communities, such as FDT, WDT, and NDN, in different environments. The tests were designed to reproduce real-world data transfers in order to analyse each tool's strengths and weaknesses, including its fault tolerance under packet loss. By comparing the tools in a controlled environment, we shed light on their relative reliability and usability for academia and industry. This work also highlights, in several cases, the best tuning parameters for WAN and LAN transfers for maximum performance. (A sketch of how such latency and loss can be injected on a testbed link appears after this list.)
  2. De Vita, R; Espinal, X; Laycock, P; Shadura, O (Ed.)
    Due to the increased network traffic expected during the HL-LHC era, the T2 sites in the USA will be required to have 400 Gbps of available bandwidth to their storage solution. With this in mind, we are pursuing a scale test of the XRootD software when used to perform Third Party Copy transfers over the HTTP protocol. Our main objective is to understand the possible limitations in the software stack to achieve the target transfer rate; to that end we have set up a testbed of multiple XRootD servers at both UCSD and Caltech, connected through a dedicated link capable of 400 Gbps end-to-end. Building upon our experience deploying containerized XRootD servers, we use Kubernetes to easily deploy and test different configurations of the testbed. In this work we present our experience performing these tests and the lessons learned. (A sketch of an HTTP Third Party Copy request appears after this list.)
  3. De Vita, R; Espinal, X; Laycock, P; Shadura, O (Ed.)
    This work presents the design and implementation of an Open Storage System plugin for XRootD that utilizes Named Data Networking (NDN). This represents a significant step in integrating NDN, a prominent future Internet architecture, with the established data management systems within CMS. We show that this integration enables XRootD to access data in a location-transparent manner, reducing the complexity of data management and retrieval. Our approach includes the creation of the NDNc software library, which bridges the existing NDN C++ library with the high-performance NDN-DPDK data-forwarding system. This paper outlines the design of the plugin and presents preliminary results of data transfer tests on both internal and external 100 Gbps testbeds.
  4. De Vita, R; Espinal, X; Laycock, P; Shadura, O (Ed.)
    The Large Hadron Collider (LHC) experiments distribute data by leveraging a diverse array of National Research and Education Networks (NRENs), whose networks the experiments' data management systems treat as a "black box" resource. After the High Luminosity upgrade, the Compact Muon Solenoid (CMS) experiment alone will produce roughly 0.5 exabytes of data per year. NRENs are a critical part of the success of CMS and the other LHC experiments. However, during data movement the NRENs are unaware of data priorities, importance, or quality-of-service needs, which makes it challenging for operators to coordinate data movement and achieve predictable data flows across multi-domain networks. The overarching goal of SENSE (Software-defined network for End-to-end Networked Science at Exascale) is to enable national labs and universities to request and provision end-to-end intelligent network services for their application workflows by leveraging SDN (Software-Defined Networking) capabilities. This work aims to allow the LHC experiments and Rucio, the data management software used by the CMS experiment, to allocate and prioritize certain data transfers over the wide area network. In this paper we present the current progress of integrating SENSE, a multi-domain end-to-end SDN orchestrator with QoS (Quality of Service) capabilities, with Rucio. (A sketch of setting a transfer priority through the Rucio client appears after this list.)
  5. Doglioni, C.; Kim, D.; Stewart, G.A.; Silvestris, L.; Jackson, P.; Kamleh, W. (Ed.)
    The University of California system maintains excellent networking between its campuses and a number of other universities in California, including Caltech, most of them connected at 100 Gbps. The UCSD and Caltech Tier-2 centers have joined their disk systems into a single logical caching system, with worker nodes from both sites accessing data from disks at either site. This successful setup has been in place for the last two years. However, coherently managing nodes at multiple physical locations is not trivial and requires an update to the operations model used. The Pacific Research Platform (PRP) provides a Kubernetes resource pool spanning resources in the science demilitarized zones (DMZs) of several campuses in California and worldwide. We show how we migrated the XCache services from bare-metal deployments into containers on the PRP cluster. This paper presents the reasoning behind our hardware decisions and our experience migrating to and operating in a mixed environment. (A sketch of deploying such a cache as a Kubernetes Deployment appears after this list.)
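
For item 1, a minimal sketch of how latency and packet loss are commonly injected on a testbed link with Linux tc netem, wrapped in Python. The interface name and impairment values are illustrative assumptions; the abstract does not state the exact commands used in the tests.

    # Hedged sketch: impair a testbed link with netem before running a
    # transfer-tool comparison. Interface name and values are placeholders.
    import subprocess

    def set_impairment(iface: str, delay_ms: int, loss_pct: float) -> None:
        """Attach a netem qdisc adding one-way delay and random packet loss."""
        subprocess.run(
            ["tc", "qdisc", "replace", "dev", iface, "root", "netem",
             "delay", f"{delay_ms}ms", "loss", f"{loss_pct}%"],
            check=True,
        )

    def clear_impairment(iface: str) -> None:
        """Remove the netem qdisc, restoring the default queueing discipline."""
        subprocess.run(["tc", "qdisc", "del", "dev", iface, "root"], check=True)

    if __name__ == "__main__":
        # Emulate a long WAN path on a local 100 Gbps link, e.g. 100 ms of added
        # delay with 0.1% random loss, then run the transfer tool under test.
        set_impairment("eth0", delay_ms=100, loss_pct=0.1)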
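For item 2, a sketch of what an HTTP Third Party Copy request looks like in pull mode, where the destination endpoint is asked to fetch the file from the source. The endpoint URLs and token are placeholders, not values from the UCSD-Caltech testbed.

    # Hedged sketch of an HTTP-TPC "pull" COPY request against an XRootD HTTP
    # endpoint; URLs and the token are hypothetical.
    import requests

    SOURCE = "https://source.example:1094//store/test/file.root"
    DEST = "https://dest.example:1094//store/test/file.root"
    TOKEN = "..."  # bearer token accepted by both endpoints

    resp = requests.request(
        "COPY",                 # WebDAV COPY verb drives the third-party copy
        DEST,                   # pull mode: the request goes to the destination
        headers={
            "Source": SOURCE,   # where the destination should pull the file from
            "Authorization": f"Bearer {TOKEN}",
            # credential the destination presents to the source on the data leg
            "TransferHeaderAuthorization": f"Bearer {TOKEN}",
        },
        stream=True,
    )
    # The endpoint streams periodic performance markers while the copy runs.
    for line in resp.iter_lines():
        print(line.decode())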
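For item 4, a sketch of the kind of hook the SENSE integration builds on: tagging transfers with a priority and an activity class when creating a Rucio replication rule. The scope, dataset, and RSE names are placeholders, and this uses the stock Rucio client rather than the SENSE-specific machinery described in the paper.

    # Hedged sketch: create a Rucio rule whose transfers carry an elevated
    # priority and a named activity class; identifiers are hypothetical.
    from rucio.client import Client

    rucio = Client()
    rule_ids = rucio.add_replication_rule(
        dids=[{"scope": "cms", "name": "/Some/Dataset/NANOAOD#block-0001"}],
        copies=1,
        rse_expression="T2_US_Caltech",
        activity="Analysis Input",  # activity class visible to the transfer layer
        priority=5,                 # higher than the default rule priority
        comment="latency-sensitive transfer, candidate for a SENSE QoS path",
    )
    print(rule_ids)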
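For item 5, a sketch of deploying a containerized cache on a Kubernetes cluster such as the PRP using the official Python client. The image, namespace, labels, and host paths are illustrative assumptions rather than the actual PRP configuration.

    # Hedged sketch: declare an XCache-style Deployment through the Kubernetes
    # Python client; names, image, and paths are placeholders.
    from kubernetes import client, config

    config.load_kube_config()  # operator kubeconfig for the target cluster

    container = client.V1Container(
        name="xcache",
        image="example.org/cms/xcache:latest",   # placeholder cache image
        ports=[client.V1ContainerPort(container_port=1094)],
        volume_mounts=[client.V1VolumeMount(name="cache-disk",
                                            mount_path="/data/xcache")],
    )

    template = client.V1PodTemplateSpec(
        metadata=client.V1ObjectMeta(labels={"app": "xcache"}),
        spec=client.V1PodSpec(
            containers=[container],
            volumes=[client.V1Volume(
                name="cache-disk",
                host_path=client.V1HostPathVolumeSource(path="/mnt/xcache"),
            )],
        ),
    )

    deployment = client.V1Deployment(
        metadata=client.V1ObjectMeta(name="xcache", namespace="cms-cache"),
        spec=client.V1DeploymentSpec(
            replicas=1,
            selector=client.V1LabelSelector(match_labels={"app": "xcache"}),
            template=template,
        ),
    )

    client.AppsV1Api().create_namespaced_deployment(
        namespace="cms-cache", body=deployment)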